Convergence of a Distributed Least Squares
Authors
Abstract
In this paper, we consider a least-squares (LS)-based distributed algorithm built on a sensor network to estimate an unknown parameter vector of a dynamical system, where each sensor in the network has only partial information but is allowed to communicate with its neighbors. Our main task is to generalize the well-known theoretical results on the traditional LS to the current distributed case by establishing both an upper bound on the accumulated regrets of the adaptive predictor and the convergence of the distributed LS estimator, with the following key features compared with the existing literature on distributed estimation: Firstly, our theory does not need the previously imposed independence, stationarity, or Gaussian property on the system signals, and is hence applicable to stochastic systems with feedback. Secondly, the cooperative excitation condition introduced and used in this paper for the convergence of the distributed LS estimate is the weakest possible one, which shows that even if any individual sensor cannot fulfill the estimation task by the traditional LS, the whole network can still fulfill the task by the distributed LS.
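The setup described above can be sketched in code: each sensor runs a recursive least-squares update on its own partial observations and then averages its estimate with its neighbors. This is a minimal, hypothetical illustration of the general adapt-then-combine idea, not the paper's actual algorithm; the ring topology, the combination matrix, and all dimensions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.array([1.0, -2.0, 0.5])   # unknown parameter vector
n_sensors, n_steps, d = 4, 400, 3

# Assumed ring network: each sensor averages itself and two neighbors
# with equal weights (a row-stochastic combination matrix).
A = np.zeros((n_sensors, n_sensors))
for i in range(n_sensors):
    for j in (i - 1, i, i + 1):
        A[i, j % n_sensors] = 1.0 / 3.0

# Per-sensor recursive LS state: covariance-like matrix P and estimate.
P = [np.eye(d) * 100.0 for _ in range(n_sensors)]
theta = [np.zeros(d) for _ in range(n_sensors)]

for t in range(n_steps):
    # Adaptation step: each sensor updates from its own measurement.
    for i in range(n_sensors):
        phi = rng.standard_normal(d)                  # local regressor
        y = phi @ theta_true + 0.1 * rng.standard_normal()
        Pphi = P[i] @ phi
        gain = Pphi / (1.0 + phi @ Pphi)
        theta[i] = theta[i] + gain * (y - phi @ theta[i])
        P[i] = P[i] - np.outer(gain, Pphi)
    # Combination step: convex averaging with neighbors.
    theta = [sum(A[i, j] * theta[j] for j in range(n_sensors))
             for i in range(n_sensors)]

errors = [np.linalg.norm(th - theta_true) for th in theta]
```

In this toy setting every sensor's regressors are already exciting, so each one converges on its own; the paper's point is the harder case where only the cooperative excitation across the whole network guarantees convergence.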
Similar Resources
Convergence analyses of Galerkin least-squares methods
Symmetric advective-diffusive forms of the Stokes and incompressible Navier-Stokes equations are presented. The Galerkin least-squares method for advective-diffusive equations is used for both systems and is related to other stabilized methods previously studied. The presentation reveals that the convergence analysis for advective-diffusive equations, as applied before to a linearized form of the ...
Distributed Learning with Regularized Least Squares
We study distributed learning with the least squares regularization scheme in a reproducing kernel Hilbert space (RKHS). By a divide-and-conquer approach, the algorithm partitions a data set into disjoint data subsets, applies the least squares regularization scheme to each data subset to produce an output function, and then takes an average of the individual output functions as a final global ...
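The divide-and-conquer scheme summarized above can be illustrated with a short sketch: fit a regularized least-squares (ridge) solution on each disjoint subset, then average the local solutions. A plain linear model stands in for the RKHS setting of the cited paper; the data, regularization parameter, and number of partitions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0])
X = rng.standard_normal((600, 2))
y = X @ w_true + 0.1 * rng.standard_normal(600)

def ridge(Xs, ys, lam=1e-3):
    """Least-squares regularization (ridge) on one data subset."""
    d = Xs.shape[1]
    return np.linalg.solve(Xs.T @ Xs + lam * np.eye(d), Xs.T @ ys)

# Partition the data into disjoint subsets, solve locally, average globally.
parts = np.array_split(np.arange(600), 4)
w_avg = np.mean([ridge(X[p], y[p]) for p in parts], axis=0)
```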
Unifying Least Squares, Total Least Squares and Data Least Squares
The standard approaches to solving overdetermined linear systems Ax ≈ b construct minimal corrections to the vector b and/or the matrix A such that the corrected system is compatible. In ordinary least squares (LS) the correction is restricted to b, while in data least squares (DLS) it is restricted to A. In scaled total least squares (Scaled TLS) [15], corrections to both b and A are allowed, ...
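The distinction drawn above can be made concrete on a toy overdetermined system: ordinary LS restricts the correction to b, while total LS corrects both A and b via the smallest singular pair of the augmented matrix [A | b]. This is a generic textbook computation, not code from the cited paper; the problem sizes and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((10, 2))
x_true = np.array([1.0, 2.0])
b = A @ x_true + 0.05 * rng.standard_normal(10)

# Ordinary LS: min ||r|| subject to A x = b + r (correction only in b).
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

# Total LS: take the right singular vector of [A | b] for the smallest
# singular value; it is proportional to [x; -1].
_, _, Vt = np.linalg.svd(np.column_stack([A, b]))
v = Vt[-1]
x_tls = -v[:2] / v[2]
```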
On the Convergence of the Generalized Linear Least Squares Algorithm
This paper considers the issue of parameter estimation for biomedical applications using nonuniformly sampled data. The generalized linear least squares (GLLS) algorithm, first introduced by Feng and Ho (1993), is used in the medical imaging community for removal of bias when the data defining the model are correlated. GLLS provides an efficient iterative linear algorithm for the solution of th...
Convergence of Common Proximal Methods for L1-Regularized Least Squares
We compare the convergence behavior of ADMM (alternating direction method of multipliers), [F]ISTA ([fast] iterative shrinkage and thresholding algorithm) and CD (coordinate descent) methods on the model `1-regularized least squares problem (aka LASSO). We use an eigenanalysis of the operators to compare their local convergence rates when close to the solution. We find that, when applicable, CD...
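One of the methods compared in that snippet, ISTA, reduces to a compact gradient-step-plus-soft-threshold loop on the l1-regularized least squares (LASSO) problem. The sketch below uses the standard step size 1/L with L the largest eigenvalue of X^T X; the data, sparsity pattern, and regularization weight are illustrative assumptions, not values from the cited comparison.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((50, 20))
w_true = np.zeros(20)
w_true[:3] = [3.0, -2.0, 1.5]              # sparse ground truth
y = X @ w_true + 0.01 * rng.standard_normal(50)

lam = 0.1                                   # l1 penalty weight
L = np.linalg.eigvalsh(X.T @ X).max()       # Lipschitz constant of the gradient
w = np.zeros(20)
for _ in range(500):
    g = X.T @ (X @ w - y)                   # gradient of the smooth part
    z = w - g / L                           # gradient step
    w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
```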
Journal
Journal title: IEEE Transactions on Automatic Control
Year: 2021
ISSN: 0018-9286, 1558-2523, 2334-3303
DOI: https://doi.org/10.1109/tac.2020.3047989